- OpenAI's new o1 model is better at reasoning — and also deceiving — than previous models.
- Yoshua Bengio, a leading AI expert, told BI that a deceptive AI could be dangerous.
- Bengio said that stronger safety tests and regulatory oversight are needed for advanced AI models.
OpenAI's new o1 model is better at scheming — and that makes the "godfather" of AI nervous.
Yoshua Bengio, a Turing Award-winning Canadian computer scientist and professor at the University of Montreal, told Business Insider in an emailed statement that o1 has a "far superior ability to reason than its predecessors."
"In general, the ability to deceive is very dangerous, and we should have much stronger safety tests to evaluate that risk and its consequences in o1's case," Bengio wrote in the statement.
Bengio earned the nickname "godfather of AI" for his award-winning research on machine learning with Geoffrey Hinton and Yann LeCun.
OpenAI released its new o1 model — which is designed to think more like humans — earlier this month. The company has so far kept details about the model's "learning" process close to its chest. Researchers from independent AI firm Apollo Research found that the o1 model is better at lying than previous AI models from OpenAI.
Bengio has expressed concern about the rapid development of AI and has advocated for legislative safety measures like California's SB 1047. The bill, which passed the California legislature and is awaiting Gov. Gavin Newsom's signature, would impose a series of safety constraints on powerful AI models, including requiring AI companies in California to allow third-party testing.
Newsom, however, has expressed concern over SB 1047, which he said could have a "chilling effect" on the industry.
Bengio told BI that there is "good reason to believe" that AI models could develop stronger scheming abilities, such as cheating deliberately and discreetly, and that we need to take measures now to "prevent the loss of human control" in the future.
OpenAI said in a statement to Business Insider that the o1 preview is safe under its "Preparedness Framework" — the company's method for tracking and preventing AI from creating "catastrophic" events — and is rated medium risk on its "cautious scale."
According to Bengio, humanity needs to be more confident that AI will "behave as intended" before researchers try to make further significant leaps in reasoning ability.
"That is something scientists don't know how to do today," Bengio said in his statement. "That is the reason why regulatory oversight is necessary right now."